How Should the Law Treat Future AI Systems? Fictional Legal Personhood versus Legal Identity
Alexander, Heather J., Simon, Jonathan A., Pinard, Frédéric
The law draws a sharp distinction between objects and persons, and between two kinds of persons, the "fictional" kind (i.e. corporations), and the "non-fictional" kind (individual or "natural" persons). This paper will assess whether we maximize overall long-term legal coherence by (A) maintaining an object classification for all future AI systems, (B) creating fictional legal persons associated with suitably advanced, individuated AI systems (giving these fictional legal persons derogable rights and duties associated with certified groups of existing persons, potentially including free speech, contract rights, and standing to sue "on behalf of" the AI system), or (C) recognizing non-fictional legal personhood through legal identity for suitably advanced, individuated AI systems (recognizing them as entities meriting legal standing with non-derogable rights, which for the human case include life, due process, habeas corpus, freedom from slavery, and freedom of conscience). We will clarify the meaning and implications of each option along the way, considering liability, copyright, family law, fundamental rights, civil rights, citizenship, and AI safety regulation. We will tentatively find that the non-fictional personhood approach may be best from a coherence perspective, for at least some advanced AI systems. An object approach may prove untenable for sufficiently humanoid advanced systems, though we suggest that it is adequate for currently existing systems as of 2025. While fictional personhood would resolve some coherence issues for future systems, it would create others and provide solutions that are neither durable nor fit for purpose. Finally, our review will suggest that "hybrid" approaches are likely to fail and lead to further incoherence: the choice between object, fictional person, and non-fictional person is unavoidable.
Nigeria, India strengthen ties on artificial intelligence, solar energy – Businessamlive
Nigeria and India are moving to strengthen ties in fintech, artificial intelligence, scientific development, and solar energy, according to Gangadharan Balasubramanian, Indian high commissioner to Nigeria. The newly appointed envoy, who disclosed this during the commemoration of India's 76th Independence Day in Abuja on Monday, said the partnership would further strengthen bilateral ties between the two countries. Balasubramanian noted that trade and economic relations between India and Nigeria have been very strong, with over 135 Indian companies operating in Nigeria. He also said the volume of trade between the two countries has increased on both sides since the COVID-19 pandemic. "The trade volume between India and Nigeria was $14.95 billion in 2021. The trade volume has increased substantially after COVID-19, both ways," Balasubramanian said.
AI and human rights - a different take on an old debate
On September 15, 2021, the UN issued a statement that AI must not interfere with human rights. This isn't a new sentiment - a similar pronouncement was issued the year before: Michelle Bachelet, the UN High Commissioner for Human Rights, called for a moratorium on the use of artificial intelligence technology that poses a serious risk to human rights, including face-scanning systems that track people in public spaces. The UN also said Wednesday that countries should expressly ban AI applications that don't comply with international human rights law. As part of its work on technology and human rights, the UN Human Rights Office has today published a report that analyses how AI - including profiling, automated decision-making, and other machine-learning technologies - affects people's right to privacy and other rights, including the rights to health, education, freedom of movement, freedom of peaceful assembly and association, and freedom of expression. Applications that should be prohibited include government "social scoring" systems that judge people based on their behavior and certain AI-based tools that categorize people into clusters, such as by ethnicity or gender.
La veille de la cybersécurité
Urgent action is needed as it can take time to assess and address the serious risks this technology poses to human rights, warned the High Commissioner: "The higher the risk for human rights, the stricter the legal requirements for the use of AI technology should be". Ms. Bachelet also called for AI applications that cannot be used in compliance with international human rights law to be banned. "Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people's human rights". On Tuesday, the UN rights chief expressed concern about the "unprecedented level of surveillance across the globe by state and private actors", which she insisted was "incompatible" with human rights.
The U.N. Warns That AI Can Pose A Threat To Human Rights
The United Nations High Commissioner for Human Rights Michelle Bachelet speaks at a climate event in Madrid in 2019. A recent report of hers warns of the threats that AI can pose to human rights. The United Nations' human rights chief has called on member states to put a moratorium on the sale and use of artificial intelligence systems until the "negative, even catastrophic" risks they pose can be addressed. The remarks by U.N. High Commissioner for Human Rights Michelle Bachelet were in reference to a new report on the subject released in Geneva.